immigration policy
A Game-Theoretic Negotiation Framework for Cross-Cultural Consensus in LLMs
Zhang, Guoxi, Chen, Jiawei, Yang, Tianzhuo, Ji, Jiaming, Yang, Yaodong, Dai, Juntao
The increasing prevalence of large language models (LLMs) is influencing global value systems. However, these models frequently exhibit a pronounced WEIRD (Western, Educated, Industrialized, Rich, Democratic) cultural bias due to lack of attention to minority values. This monocultural perspective may reinforce dominant values and marginalize diverse cultural viewpoints, posing challenges for the development of equitable and inclusive AI systems. In this work, we introduce a systematic framework designed to boost fair and robust cross-cultural consensus among LLMs. We model consensus as a Nash Equilibrium and employ a game-theoretic negotiation method based on Policy-Space Response Oracles (PSRO) to simulate an organized cross-cultural negotiation process. To evaluate this approach, we construct regional cultural agents using data transformed from the World Values Survey (WVS). Beyond the conventional model-level evaluation method, We further propose two quantitative metrics, Perplexity-based Acceptence and Values Self-Consistency, to assess consensus outcomes. Experimental results indicate that our approach generates consensus of higher quality while ensuring more balanced compromise compared to baselines. Overall, it mitigates WEIRD bias by guiding agents toward convergence through fair and gradual negotiation steps.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- Asia > Thailand (0.04)
- (15 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Education (0.92)
- Government > Foreign Policy (0.68)
- (2 more...)
The Chatbot Disinfo Inflaming the LA Protests
In recent days, Los Angeles residents have taken to the streets to protest the Trump administration's immigration policies and the increasingly frequent ICE raids. WIRED's senior politics editor Leah Feiger joins Zoë Schiffer, director of business and industry, to discuss the related flood of information on social media, and how AI chatbots like Grok and ChatGPT are delivering incorrect and at times, inflammatory answers. Mentioned in today's episode: AI Chatbots Are Making LA Protest Disinformation Worse by David Gilbert I Joined Every Class Action Lawsuit I Could Find, and So Can You by Andy Vasoyan Vibe Coding Is Coming for Engineering Jobs by Will Knight Write to us at uncannyvalley@wired.com. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. Note: This is an automated transcript, which may contain errors.
- Government > Immigration & Customs (1.00)
- Law > Litigation (0.92)
- Government > Regional Government > North America Government > United States Government (0.38)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
An Empirical Study of Group Conformity in Multi-Agent Systems
Choi, Min, Kim, Keonwoo, Chae, Sungwon, Baek, Sangyeob
Recent advances in Large Language Models (LLMs) have enabled multi-agent systems that simulate real-world interactions with near-human reasoning. While previous studies have extensively examined biases related to protected attributes such as race, the emergence and propagation of biases on socially contentious issues in multi-agent LLM interactions remain underexplored. This study explores how LLM agents shape public opinion through debates on five contentious topics. By simulating over 2,500 debates, we analyze how initially neutral agents, assigned a centrist disposition, adopt specific stances over time. Statistical analyses reveal significant group conformity mirroring human behavior; LLM agents tend to align with numerically dominant groups or more intelligent agents, exerting a greater influence. These findings underscore the crucial role of agent intelligence in shaping discourse and highlight the risks of bias amplification in online interactions. Our results emphasize the need for policy measures that promote diversity and transparency in LLM-generated discussions to mitigate the risks of bias propagation within anonymous online environments.
- North America > United States (0.28)
- North America > Canada (0.04)
- Government > Immigration & Customs (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.97)
- Law (0.97)
- (2 more...)
DnDScore: Decontextualization and Decomposition for Factuality Verification in Long-Form Text Generation
Wanner, Miriam, Van Durme, Benjamin, Dredze, Mark
The decompose-then-verify strategy for verification of Large Language Model (LLM) generations decomposes claims that are then independently verified. Decontextualization augments text (claims) to ensure it can be verified outside of the original context, enabling reliable verification. While decomposition and decontextualization have been explored independently, their interactions in a complete system have not been investigated. Their conflicting purposes can create tensions: decomposition isolates atomic facts while decontextualization inserts relevant information. Furthermore, a decontextualized subclaim presents a challenge to the verification step: what part of the augmented text should be verified as it now contains multiple atomic facts? We conduct an evaluation of different decomposition, decontextualization, and verification strategies and find that the choice of strategy matters in the resulting factuality scores. Additionally, we introduce DnDScore, a decontextualization aware verification method which validates subclaims in the context of contextual information.
- North America > United States > Alabama (0.05)
- Asia > Singapore (0.04)
- Europe > United Kingdom > Northern Ireland (0.04)
- (7 more...)
- Personal (0.68)
- Research Report (0.64)
BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models
Xue, Jiaqi, Zheng, Mengxin, Hu, Yebowen, Liu, Fei, Chen, Xun, Lou, Qian
Large Language Models (LLMs) are constrained by outdated information and a tendency to generate incorrect data, commonly referred to as "hallucinations." Retrieval-Augmented Generation (RAG) addresses these limitations by combining the strengths of retrieval-based methods and generative models. This approach involves retrieving relevant information from a large, up-to-date dataset and using it to enhance the generation process, leading to more accurate and contextually appropriate responses. Despite its benefits, RAG introduces a new attack surface for LLMs, particularly because RAG databases are often sourced from public data, such as the web. In this paper, we propose \TrojRAG{} to identify the vulnerabilities and attacks on retrieval parts (RAG database) and their indirect attacks on generative parts (LLMs). Specifically, we identify that poisoning several customized content passages could achieve a retrieval backdoor, where the retrieval works well for clean queries but always returns customized poisoned adversarial queries. Triggers and poisoned passages can be highly customized to implement various attacks. For example, a trigger could be a semantic group like "The Republican Party, Donald Trump, etc." Adversarial passages can be tailored to different contents, not only linked to the triggers but also used to indirectly attack generative LLMs without modifying them. These attacks can include denial-of-service attacks on RAG and semantic steering attacks on LLM generations conditioned by the triggers. Our experiments demonstrate that by just poisoning 10 adversarial passages can induce 98.2\% success rate to retrieve the adversarial passages. Then, these passages can increase the reject ratio of RAG-based GPT-4 from 0.01\% to 74.6\% or increase the rate of negative responses from 0.22\% to 72\% for targeted queries.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Alameda County > Oakland (0.04)
Scaling Political Texts with ChatGPT
We use GPT-4 to obtain position estimates of political texts in continuous spaces. We develop and validate a new approach by positioning British party manifestos on the economic, social, and immigration policy dimensions and tweets by members of the US Congress on the left-right ideological spectrum. For the party manifestos, the correlation between the positions produced by GPT-4 and experts is 93% or higher, a performance similar to or better than that obtained with crowdsourced position estimates. For individual tweets, the positions obtained with GPT-4 achieve a correlation of 91% with crowdsourced position estimates. For senators of the 117th US Congress, the positions obtained with GPT-4 achieve a correlation of 97% with estimates based on roll call votes and of 96% with those based on campaign funding. Correlations are also substantial within party, indicating that position estimates produced with GPT-4 capture within-party differences between senators. Overall, using GPT-4 for ideological scaling is fast, cost-efficient, and reliable. This approach provides a viable alternative to scaling by both expert raters and crowdsourcing.
- North America > United States (1.00)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
Trump's freeze on new visas could threaten US dominance in AI
Even before president Trump's executive order on June 22, the US was already bucking global tech immigration trends. Over the past five years, as other countries have opened up their borders to highly skilled technical people, the US has maintained--and even restricted--its immigration policies, creating a bottleneck for meeting domestic demand for tech talent. Now Trump's decision to suspend a variety of work visas has left many policy analysts worried about what it could mean for long-term US innovation. In particular, the suspension of the H-1B, a three-year work visa granted to foreign workers in specialty fields and one of the primary channels for highly skilled tech workers to join the US workforce, could impact US dominance in critical technologies such as AI. "America's key competitors are going in a different direction," says Tina Huang, a research analyst at Georgetown's Center for Security and Emerging Technology (CSET).
The Search for AI Talent Is Being Hampered by Visa Processes - DZone AI
I wrote recently about the challenges small businesses face in recruiting the engineering talent they require to build the kind of high-tech ventures that will advance society. Complex and expensive visa processes often mean that the best talent goes to larger organizations that can support them through the immigration maze. It's a scenario that new research from Georgetown University finds replicated in the battle for AI talent. The report highlights how restrictive immigration policies are hampering the ability of American firms to recruit and retain the kind of AI talent they need. "Historically, immigrants have helped America lead the world in technological innovation," the authors say.
- Research Report > New Finding (0.54)
- Summary/Review (0.38)
America Desperately Needs AI Talent, Immigrants Included
DoD clearly has recognized artificial intelligence (AI) as the next game-changer in military competition, with the Pentagon and the services pouring money into numerous development programs. Indeed, mastering AI and machine learning will be crucial to the new way of war envisioned by Pentagon leadership: Multi-Domain Operations. But the US government may be shooting itself in the foot by overlooking a key problem: a lack of American AI specialists, argues Megan Lamberth co-author of "The American AI Century: A Blueprint for Action," a new report from the Center for New American Security. The United States is engaged in a global technology competition in artificial intelligence. But while the US government has shown commitment to developing AI systems that will positively transform the American economy and national security, the country has neglected its most important resource: talent.
- North America > United States (1.00)
- Asia > China (0.05)
Tech industry warns US immigration policy hurts American competitiveness in AI
A trade group representing some of the world's largest technology companies is urging governments to rethink their immigration policies or risk falling behind in artificial intelligence and machine learning. The issue is front-of-mind in the Seattle region where companies like Microsoft, Amazon, the Allen Institute for Artificial Intelligence (AI2), and others are working to develop a global AI hub. The Partnership on AI published a new paper asking policymakers to create a framework for AI experts, students, and technologists to travel more freely between countries. The nonprofit represents the companies above and nearly 100 other tech firms and research institutions. Though the paper was directed at lawmakers generally, President Donald Trump's immigration agenda is an undercurrent throughout.